A Rapidly Convergent Descent Method for Minimization

Authors

  • Robert Michael Lewis
  • E. N. Atkinson
  • M. G. Doherty
  • R. S. Freeman
Abstract

…information is used. A choice then has to be made as to which is the most efficient option.

Acknowledgments. We are grateful to the referees for their useful comments. We thank Robert Michael Lewis for his valuable suggestions on how best to present this material, particularly the results given in §7.

Direct Search Methods on Parallel Machines

a direct search method seems to be in order. Tackling this problem, however, means that we will need to rethink the original implementation of our parallel multidirectional search schemes. To begin with, our current implementation is best suited for the case when all the function evaluations require approximately the same time to complete. Thus, there is a natural synchronization that allows us to implement the algorithm without either a controlling process or any concerns for load balancing. This will not always be the case when dealing with more difficult problems. Hence, we will need an asynchronous, task-queue-based implementation with a single controlling process.

Another direction of research would be to extend the parallel multidirectional search algorithm to problems with constraints. We believe it is possible to extend the parallel algorithms, with only minor modifications, to problems with bounded variables. We are also interested in handling linear constraints. If we can handle bounded variables, it should be possible to transfer many of the ideas learned during the development of interior point methods to our simplex-based method for handling problems with linear constraints.

There are several other ideas we would also like to pursue. Although we have a simple, fast algorithm to generate templates for the parallel multidirectional search schemes, it is possible that there are other, perhaps better, initialization schemes we could implement.
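The asynchronous, task-queue-based evaluation mentioned above could be sketched as follows. This is an illustrative sketch, not the authors' implementation: the function name `evaluate_async` and the use of a thread pool as the "controlling process" are assumptions for this toy example.

```python
# Hypothetical sketch: a single controlling process farms function
# evaluations out to workers; results are collected as they finish,
# so slow evaluations do not block fast ones (no lock-step synchronization).
from concurrent.futures import ThreadPoolExecutor, as_completed

def evaluate_async(f, points, max_workers=4):
    """Evaluate f at each point; completions may arrive out of order,
    so we record each result under its submission index."""
    results = {}
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        futures = {pool.submit(f, p): i for i, p in enumerate(points)}
        for fut in as_completed(futures):
            results[futures[fut]] = fut.result()
    return [results[i] for i in range(len(points))]

# Example: a cheap quadratic stands in for an expensive objective function.
values = evaluate_async(lambda x: x * x, [1.0, 2.0, 3.0])
```

The task queue absorbs uneven evaluation times, which is exactly the load-balancing concern the synchronous scheme avoids only when all evaluations cost the same.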
One of the few pieces of information that the basic multidirectional search algorithm carries from iteration to iteration is the size of the step taken in the previous iteration, which determines the size of the step taken in the current iteration. If an expansion step was accepted, this would indicate that the simplex is still far from a solution. If the contraction step was accepted, then either the simplex is near a solution or it is trapped in a difficult region. If we allowed mixed strategies, i.e., different templates depending on the type of step accepted in the previous iteration, then it seems possible that we …
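The step-size bookkeeping described in this paragraph might look like the following sketch. The function name and the expansion/contraction factors (2.0 and 0.5) are illustrative assumptions, not values taken from the paper.

```python
# Hypothetical sketch: the only state carried between iterations is the
# previous step size, scaled up after an accepted expansion and down
# after an accepted contraction.
def next_step_size(step, accepted, expand=2.0, contract=0.5):
    """Return the step size for the next iteration.

    accepted -- 'expansion' if the expansion step was accepted
                (simplex likely still far from a solution),
                'contraction' if the contraction step was accepted
                (near a solution, or trapped in a difficult region);
                any other value leaves the step unchanged.
    """
    if accepted == 'expansion':
        return step * expand
    if accepted == 'contraction':
        return step * contract
    return step
```

A mixed strategy would additionally select a different search template based on the same `accepted` flag, rather than only rescaling the step.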


Similar Articles

Hybrid steepest-descent method with sequential and functional errors in Banach space

Let $X$ be a reflexive Banach space, $T:X\to X$ be a nonexpansive mapping with $C=\mathrm{Fix}(T)\neq\emptyset$ and $F:X\to X$ be $\delta$-strongly accretive and $\lambda$-strictly pseudocontractive with $\delta+\lambda>1$. In this paper, we present modified hybrid steepest-descent methods, involving sequential errors and functional errors with functions admitting a center, which generate convergent sequences ...


Constrained Nonlinear Optimal Control via a Hybrid BA-SD

The non-convex behavior presented by nonlinear systems limits the application of classical optimization techniques to solve optimal control problems for these kinds of systems. This paper proposes a hybrid algorithm, namely BA-SD, by combining Bee algorithm (BA) with steepest descent (SD) method for numerically solving nonlinear optimal control (NOC) problems. The proposed algorithm includes th...


A Convergent Gradient Descent Algorithm for Rank Minimization and Semidefinite Programming from Random Linear Measurements

We propose a simple, scalable, and fast gradient descent algorithm to optimize a nonconvex objective for the rank minimization problem and a closely related family of semidefinite programs. With O(rκn log n) random measurements of a positive semidefinite n×n matrix of rank r and condition number κ, our method is guaranteed to converge linearly to the global optimum.
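As a rough illustration of factored gradient descent for this kind of rank-minimization problem: the sketch below recovers a low-rank PSD matrix X = UU^T from linear measurements y_i = <A_i, X> by descending on the factor U. It is a toy sketch under simplifying assumptions; the names, step size, and problem sizes are ours, not the paper's.

```python
# Toy sketch: gradient descent on f(U) = 0.5 * sum_i (<A_i, U U^T> - y_i)^2,
# the factored (nonconvex) form of a rank-constrained least-squares problem.
import numpy as np

def measure(As, U):
    """Linear measurements <A_i, U U^T> of the PSD matrix U U^T."""
    X = U @ U.T
    return np.array([np.sum(A * X) for A in As])

def descend(As, y, U0, step=0.001, iters=500):
    """Plain gradient descent on the factor U (step size is illustrative)."""
    U = U0.copy()
    for _ in range(iters):
        r = measure(As, U) - y                              # residuals
        grad = sum(ri * (A + A.T) for ri, A in zip(r, As)) @ U
        U -= step * grad
    return U

rng = np.random.default_rng(0)
n, rank, m = 3, 1, 12
U_true = rng.normal(size=(n, rank))
As = [rng.normal(size=(n, n)) for _ in range(m)]
y = measure(As, U_true)                   # noiseless measurements
U0 = 0.1 * rng.normal(size=(n, rank))     # small random initialization
U = descend(As, y, U0)
```

With generic random measurements and a small step size, the residual shrinks from its initial value; the cited paper's point is that, with suitably many random measurements and proper initialization, such iterations converge linearly to the global optimum despite the nonconvexity.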


Stability of Convergent Continuous Descent Methods

We consider continuous descent methods for the minimization of convex functions defined on a general Banach space. In our previous work we showed that most of them (in the sense of Baire category) converged. In the present paper we show that convergent continuous descent methods are stable under small perturbations.


Block BFGS Methods

We introduce a quasi-Newton method with block updates called Block BFGS. We show that this method, performed with inexact Armijo-Wolfe line searches, converges globally and superlinearly under the same convexity assumptions as BFGS. We also show that Block BFGS is globally convergent to a stationary point when applied to non-convex functions with bounded Hessian, and discuss other modifications...


Stochastic Optimization with Variance Reduction for Infinite Datasets with Finite Sum Structure

Stochastic optimization algorithms with variance reduction have proven successful for minimizing large finite sums of functions. However, in the context of empirical risk minimization, it is often helpful to augment the training set by considering random perturbations of input examples. In this case, the objective is no longer a finite sum, and the main candidate for optimization is the stochas...




Publication date: 1991